Results 1 - 20 of 59
1.
Radiology ; 305(2): 454-465, 2022 11.
Article in English | MEDLINE | ID: covidwho-1950321

ABSTRACT

Background Developing deep learning models for radiology requires large data sets and substantial computational resources. Data set size limitations can be further exacerbated by distribution shifts, such as rapid changes in patient populations and standard of care during the COVID-19 pandemic. A common partial mitigation is transfer learning by pretraining a "generic network" on a large nonmedical data set and then fine-tuning on a task-specific radiology data set. Purpose To reduce data set size requirements for chest radiography deep learning models by using an advanced machine learning approach (supervised contrastive [SupCon] learning) to generate chest radiography networks. Materials and Methods SupCon helped generate chest radiography networks from 821 544 chest radiographs from India and the United States. The chest radiography networks were used as a starting point for further machine learning model development for 10 prediction tasks (eg, airspace opacity, fracture, tuberculosis, and COVID-19 outcomes) by using five data sets comprising 684 955 chest radiographs from India, the United States, and China. Three model development setups were tested (linear classifier, nonlinear classifier, and fine-tuning the full network) with different data set sizes from eight to 85. Results Across a majority of tasks, compared with transfer learning from a nonmedical data set, SupCon reduced label requirements up to 688-fold and improved the area under the receiver operating characteristic curve (AUC) at matching data set sizes. At the extreme low-data regimen, training small nonlinear models by using only 45 chest radiographs yielded an AUC of 0.95 (noninferior to radiologist performance) in classifying microbiology-confirmed tuberculosis in external validation. At a more moderate data regimen, training small nonlinear models by using only 528 chest radiographs yielded an AUC of 0.75 in predicting severe COVID-19 outcomes. 
Conclusion Supervised contrastive learning enabled performance comparable to state-of-the-art deep learning models in multiple clinical tasks by using as few as 45 images and is a promising method for predictive modeling with use of small data sets and for predicting outcomes in shifting patient populations. © RSNA, 2022 Online supplemental material is available for this article.
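The supervised contrastive (SupCon) objective named above can be sketched compactly. The following numpy implementation of the batch loss is illustrative only; the function name, batch, and temperature value are ours, not from the article:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings
    z (n, d) with integer class labels: each anchor is attracted to all
    other samples sharing its label and repelled from the rest."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / tau                                # scaled similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, sim)         # exclude self-pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability of the positives per anchor, negated
    per_anchor = (-np.where(pos, log_prob, 0.0).sum(axis=1)
                  / np.maximum(pos.sum(axis=1), 1))
    return per_anchor.mean()
```

Pulling same-class embeddings together during pretraining is what lets the resulting encoder be reused with very small labelled sets downstream.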


Subject(s)
COVID-19 , Deep Learning , Humans , Radiography, Thoracic/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Pandemics , COVID-19/diagnostic imaging , Retrospective Studies , Radiography , Machine Learning
2.
Sci Rep ; 12(1): 8922, 2022 05 26.
Article in English | MEDLINE | ID: covidwho-1864771

ABSTRACT

Since its appearance, the COVID-19 outbreak has affected about 200 countries and endangered millions of lives. COVID-19 is an extremely contagious disease that can quickly overwhelm healthcare systems if infected cases are not handled in a timely manner. Several convolutional neural network (CNN)-based techniques have been developed to diagnose COVID-19. These techniques require large labelled datasets to train the algorithm fully, but few such datasets are available. To mitigate this problem and facilitate the diagnosis of COVID-19, we developed a transformer-based approach with a self-attention mechanism that operates on CT slices. The transformer architecture can exploit ample unlabelled datasets through pre-training. This paper compares the performance of the self-attention transformer-based approach with CNN and ensemble classifiers for COVID-19 diagnosis on the binary Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) dataset and the multi-class Hybrid-learning for UnbiaSed predicTion of COVID-19 (HUST-19) CT scan dataset. To perform this comparison, we tested deep learning-based classifiers and ensemble classifiers against the proposed approach using CT scan images. The proposed approach is more effective in detecting COVID-19, with an accuracy of 99.7% on the multi-class HUST-19 dataset and 98% on the binary-class SARS-CoV-2 dataset. Cross-corpus evaluation achieves an accuracy of 93% when training the model on the HUST-19 dataset and testing on a Brazilian COVID-19 dataset.
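The self-attention mechanism at the core of the transformer approach can be sketched in a few lines of numpy. This is a generic single-head, unmasked sketch with the projection matrices supplied as inputs rather than learned, not the paper's architecture:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence of
    token embeddings x (seq_len, d_model); Wq/Wk/Wv project tokens into
    query, key, and value spaces."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])       # (seq, seq) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v, weights
```

Each output token is a weighted mixture of all value vectors, so every CT-slice patch can attend to every other patch in one step.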


Subject(s)
COVID-19 , Algorithms , COVID-19/diagnosis , Humans , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , SARS-CoV-2
3.
PLoS One ; 17(3): e0263916, 2022.
Article in English | MEDLINE | ID: covidwho-1742004

ABSTRACT

OBJECTIVES: Ground-glass opacity (GGO), a hazy, gray-appearing density on computed tomography (CT) of the lungs, is one of the hallmark features of SARS-CoV-2 infection in COVID-19 patients. This AI-driven study focuses on the segmentation, morphology, and distribution patterns of GGOs. METHOD: We use an AI-driven machine learning approach called PointNet++ to detect and quantify GGOs in CT scans of COVID-19 patients and to assess the severity of the disease. We conducted our study on "MosMedData", which contains CT lung scans of 1110 patients with or without COVID-19 infection. We quantify the morphologies of GGOs using Minkowski tensors and compute the abnormality score of individual regions of the segmented lung and GGOs. RESULTS: PointNet++ detects GGOs with the highest evaluation accuracy (98%), average class accuracy (95%), and intersection over union (92%) using only a fraction of the 3D data. On average, the shapes of GGOs in the COVID-19 datasets deviate from sphericity by 15%, and anisotropies in GGOs are dominated by dipole and hexapole components. These anisotropies may help to quantitatively delineate GGOs of COVID-19 from those of other lung diseases. CONCLUSION: The PointNet++ and Minkowski tensor-based morphological approaches, together with abnormality analysis, provide radiologists and clinicians with a valuable set of tools for interpreting CT lung scans of COVID-19 patients. Implementation would be particularly useful in countries severely devastated by COVID-19, such as India, where the number of cases has outstripped available resources, creating delays or even breakdowns in patient care. This AI-driven approach synthesizes both the unique GGO distribution pattern and the severity of the disease to allow for more efficient diagnosis, triaging, and conservation of limited resources.
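The reported 15% deviation from sphericity can be made concrete with the standard isoperimetric sphericity measure. This is a generic illustration from a lesion's volume and surface area, not the paper's Minkowski-tensor computation:

```python
import numpy as np

def sphericity(volume, surface_area):
    """Isoperimetric sphericity: equals 1.0 for a perfect sphere and
    falls below 1.0 as the shape becomes less sphere-like."""
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

def sphericity_deviation(volume, surface_area):
    """Percent deviation from a perfect sphere, as reported per GGO."""
    return 100 * (1 - sphericity(volume, surface_area))
```

For a unit cube (volume 1, surface area 6) the measure is about 0.806, i.e. a ~19% deviation, so a 15% average deviation describes mildly irregular blobs.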


Subject(s)
COVID-19/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Artificial Intelligence , COVID-19/pathology , Female , Humans , India , Lung/diagnostic imaging , Male , Patient Acuity , Retrospective Studies , Tomography, X-Ray Computed/methods , Unsupervised Machine Learning
4.
Sci Rep ; 11(1): 24065, 2021 12 15.
Article in English | MEDLINE | ID: covidwho-1585806

ABSTRACT

COVID-19 is a respiratory disease that causes infection in both the lungs and the upper respiratory tract. The World Health Organization (WHO) has declared it a global pandemic because of its rapid spread across the globe. The most common method for COVID-19 diagnosis is real-time reverse transcription-polymerase chain reaction (RT-PCR), which takes a significant amount of time to return a result. Computer-based medical image analysis is more beneficial for diagnosing such diseases, as it can give better results in less time. Computed tomography (CT) scans are used to monitor lung diseases, including COVID-19. In this work, a hybrid model for COVID-19 detection has been developed that has two key stages. In the first stage, we fine-tuned the parameters of pre-trained convolutional neural networks (CNNs) to extract features from COVID-19 affected lungs. As pre-trained CNNs, we used two standard networks, GoogleNet and ResNet18. Then, we proposed a hybrid meta-heuristic feature selection (FS) algorithm, named Manta Ray Foraging based Golden Ratio Optimizer (MRFGRO), to select the most significant feature subset. The proposed model is implemented on three publicly available datasets, namely the COVID-CT, SARS-COV-2, and MOSMED datasets, and attains state-of-the-art classification accuracies of 99.15%, 99.42%, and 95.57%, respectively. The obtained results confirm that the proposed approach is quite efficient compared with the local texture descriptors used for COVID-19 detection from chest CT-scan images.
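Wrapper-style metaheuristic feature selection of this kind searches over binary feature masks scored by a downstream classifier. The sketch below is a toy stand-in for MRFGRO (which we do not reproduce): simple mutate-and-accept search with a nearest-centroid fitness, all names and parameters ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Score a binary feature mask by nearest-centroid training accuracy,
    lightly penalising larger subsets (illustrative objective)."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.mean()

def select_features(X, y, candidates=20, iters=30):
    """Mutate-and-accept search over binary masks: flip ~10% of bits,
    keep the candidate if its fitness improves (a toy stand-in for the
    paper's MRFGRO metaheuristic)."""
    d = X.shape[1]
    best = rng.random(d) < 0.5
    best_fit = fitness(best, X, y)
    for _ in range(iters):
        for _ in range(candidates):
            cand = best ^ (rng.random(d) < 0.1)
            f = fitness(cand, X, y)
            if f > best_fit:
                best, best_fit = cand, f
    return best, best_fit
```

Real metaheuristics such as MRFGRO differ in how candidates are generated, but the mask-plus-fitness loop is the common skeleton.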


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , COVID-19 Testing/methods , Deep Learning , Heuristics , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
5.
Sci Rep ; 11(1): 23914, 2021 12 13.
Article in English | MEDLINE | ID: covidwho-1569278

ABSTRACT

Chest X-ray (CXR) images have been one of the important diagnostic tools used in COVID-19 diagnosis. Deep learning (DL)-based methods have been used heavily to analyze these images. Compared with other DL-based methods, the recently proposed bag of deep visual words (BoDVW) method has been shown to be a prominent representation of CXR images because of its better discriminability. However, single-scale BoDVW features are insufficient to capture the detailed semantic information of the infected regions in the lungs, as the resolution of such images varies in real applications. In this paper, we propose new multi-scale bag of deep visual words (MBoDVW) features, which exploit three different scales of the output feature map of the 4th pooling layer of the VGG-16 model. For the MBoDVW features, we perform a convolution with max pooling operation over the 4th pooling layer using three different kernels: [Formula: see text], [Formula: see text], and [Formula: see text]. We evaluate our proposed features with the Support Vector Machine (SVM) classification algorithm on four public CXR datasets (CD1, CD2, CD3, and CD4) with over 5000 CXR images. Experimental results show that our method produces stable and prominent classification accuracy (84.37%, 88.88%, 90.29%, and 83.65% on CD1, CD2, CD3, and CD4, respectively).
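The bag-of-visual-words encoding underlying BoDVW can be sketched generically: learn a codebook over local descriptors with k-means, then represent each image as a normalised histogram of nearest codewords. This is an illustrative sketch, not the authors' exact MBoDVW pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Plain k-means to learn a visual-word codebook from a pool of
    local descriptors X (n, d)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return centers

def bovw_histogram(descriptors, codebook):
    """Encode one image's local descriptors as a normalised histogram of
    nearest visual words (the bag-of-visual-words step)."""
    d = ((descriptors[:, None, :] - codebook[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

In the deep variant, the descriptors come from CNN feature maps (here, VGG-16's 4th pooling layer) rather than hand-crafted features; the multi-scale version concatenates histograms computed at several scales.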


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Databases, Factual , Deep Learning , Humans , Support Vector Machine
6.
J Comput Assist Tomogr ; 45(6): 970-978, 2021.
Article in English | MEDLINE | ID: covidwho-1440699

ABSTRACT

OBJECTIVE: To quantitatively evaluate computed tomography (CT) parameters of coronavirus disease 2019 (COVID-19) pneumonia using artificial intelligence (AI)-based software in different clinical severity groups during the disease course. METHODS: From March 11 to April 15, 2020, 51 patients (age, 18-84 years; 28 men) diagnosed and hospitalized with COVID-19 pneumonia, with a total of 116 CT scans, were enrolled in the study. Patients were divided into mild (n = 12), moderate (n = 31), and severe (n = 8) groups based on clinical severity. An AI-based quantitative CT analysis, including lung volume, opacity score, opacity volume, percentage of opacity, and mean lung density, was performed on initial and follow-up CTs obtained at different time points. Receiver operating characteristic analysis was performed to determine the diagnostic ability of quantitative CT parameters to discriminate severe from nonsevere pneumonia. RESULTS: At baseline, the severe group had significantly higher opacity score, opacity volume, percentage of opacity, and mean lung density than the moderate group (all P ≤ 0.001). Across consecutive time points, the severe group had a significant decrease in lung volume (P = 0.006) and significant increases in total opacity score (P = 0.003) and percentage of opacity (P = 0.007). A significant increase in total opacity score was also observed in the mild group (P = 0.011). Residual opacities were observed in all groups.
Involvement of more than 4 lobes (sensitivity, 100%; specificity, 65.26%), total opacity score greater than 4 (sensitivity, 100%; specificity, 64.21%), total opacity volume greater than 337.4 mL (sensitivity, 80.95%; specificity, 84.21%), percentage of opacity greater than 11% (sensitivity, 80.95%; specificity, 88.42%), total high opacity volume greater than 10.5 mL (sensitivity, 95.24%; specificity, 66.32%), percentage of high opacity greater than 0.8% (sensitivity, 85.71%; specificity, 80.00%), and mean lung density greater than -705 HU (sensitivity, 57.14%; specificity, 90.53%) were related to severe pneumonia. CONCLUSIONS: AI-based quantitative CT analysis is an objective tool for demonstrating disease severity and can also assist the clinician in follow-up by providing information about the disease course and prognosis according to different clinical severity groups.
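The threshold-based sensitivity/specificity figures and the ROC analysis above follow directly from comparing scores against a cutoff. A minimal numpy sketch with illustrative data (not the study's):

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity/specificity of the rule 'score > threshold' for the
    positive (severe) class; labels are 0/1."""
    pred = scores > threshold
    sens = pred[labels == 1].mean()       # fraction of positives caught
    spec = (~pred)[labels == 0].mean()    # fraction of negatives cleared
    return sens, spec

def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    formulation; ties are ignored for simplicity."""
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

Sweeping the threshold trades sensitivity against specificity, which is exactly what produces cutoffs like "opacity volume > 337.4 mL".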


Subject(s)
Artificial Intelligence , COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Adolescent , Adult , Aged , Aged, 80 and over , Evaluation Studies as Topic , Female , Humans , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , SARS-CoV-2 , Sensitivity and Specificity , Severity of Illness Index , Time , Young Adult
7.
Sci Rep ; 11(1): 18478, 2021 09 16.
Article in English | MEDLINE | ID: covidwho-1415957

ABSTRACT

With the emergence of the novel coronavirus disease at the end of 2019, several approaches were proposed to help physicians detect the disease, such as using deep learning to recognize lung involvement based on the pattern of pneumonia. These approaches rely on analyzing CT images and exploring COVID-19 pathologies in the lung. Most of the successful methods are based on deep learning, which is the state of the art. Nevertheless, the big drawback of deep approaches is their need for many samples, which is not always possible. This work proposes a combined deep architecture that benefits from both of its constituent architectures, DenseNet and CapsNet. To improve the generalization of the deep model, we propose a regularization term with far fewer parameters. Network convergence improved significantly, especially when the amount of training data is small. We also propose a novel cost-sensitive loss function for imbalanced data that makes our model feasible when the number of positive samples is limited. These novelties make our approach more robust in real-world situations with imbalanced data, which are common in hospitals. We analyzed our approach on two publicly available datasets, HUST and COVID-CT, under different protocols. In the first HUST protocol, we followed the original paper's setup and outperformed it. In the second HUST protocol, we show our approach's superiority on imbalanced data. Finally, with three different validations on COVID-CT, we provide evaluations in the presence of limited data, along with a comparison with the state of the art.
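A cost-sensitive loss for imbalanced data, as described, typically up-weights errors on the rare positive class. A minimal weighted binary cross-entropy sketch (the weighting scheme is generic, not the paper's exact formulation):

```python
import numpy as np

def weighted_bce(p, y, w_pos):
    """Binary cross-entropy that up-weights the rare positive class.
    p: predicted probabilities, y: 0/1 labels, w_pos: positive-class
    weight (w_pos > 1 makes missed positives costlier)."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -(w_pos * y * np.log(p) + (1 - y) * np.log(1 - p)).mean()
```

Setting w_pos to, e.g., the negative/positive class ratio keeps the gradient from being dominated by the abundant negative class.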


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Deep Learning , Early Diagnosis , Humans , Neural Networks, Computer , Tomography, X-Ray Computed
8.
Sci Rep ; 11(1): 15523, 2021 09 01.
Article in English | MEDLINE | ID: covidwho-1392879

ABSTRACT

Chest radiography (CXR) is the most widely-used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to detect every possible condition by building multiple separate systems, each of which detects one or more pre-specified conditions. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For training and tuning the system, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system using 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system trained using a large dataset containing a diverse array of CXR abnormalities generalizes to new patient populations and unseen diseases. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases was reduced by 7-28%. These results represent an important step towards evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist. Lastly, to facilitate the continued development of AI models for CXR, we release our collected labels for the publicly available dataset.
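The reported turnaround-time reduction comes from reordering the reading queue so flagged cases are read first. A toy simulation of that workflow (queue composition and read time are illustrative, not the study's parameters):

```python
def mean_turnaround(queue, prioritise=False, read_time=1.0):
    """Mean time-to-read for abnormal cases in a sequential reading
    queue; with prioritise=True, AI-flagged abnormal cases are moved
    to the front before reading starts."""
    if prioritise:
        queue = ([c for c in queue if c == "abnormal"] +
                 [c for c in queue if c == "normal"])
    # case i is finished after (i + 1) * read_time
    times = [(i + 1) * read_time
             for i, c in enumerate(queue) if c == "abnormal"]
    return sum(times) / len(times)
```

With two abnormal cases buried behind eight normal ones, FIFO reading yields a mean turnaround of 9.5 reads, while prioritisation drops it to 1.5 — the mechanism behind the 7-28% figure, though the real gain depends on prevalence and reader capacity.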


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tuberculosis/diagnostic imaging , Adult , Aged , Algorithms , Case-Control Studies , China , Deep Learning , Female , Humans , India , Male , Middle Aged , Radiography, Thoracic , United States
9.
Sci Rep ; 11(1): 17318, 2021 08 27.
Article in English | MEDLINE | ID: covidwho-1376210

ABSTRACT

Infectious diseases are among the leading causes of mortality across the globe and have cost tremendous numbers of lives, the latest being coronavirus disease (COVID-19), which has become the most recent challenge. The extreme nature of this infectious virus and its ability to spread without control have made it mandatory to find an efficient auto-diagnosis system to assist the people who work in close contact with patients. As fuzzy logic is considered a powerful technique for modeling vagueness in medical practice, an Adaptive Neuro-Fuzzy Inference System (ANFIS) is proposed in this paper for automatic COVID-19 detection from chest X-ray images, based on characteristics derived by texture analysis using the gray-level co-occurrence matrix (GLCM) technique. Unlike existing methods, especially deep learning-based approaches, the proposed ANFIS-based method can work on small datasets. The results show promising accuracy; compared with other state-of-the-art techniques, the proposed method performs on par with deep learning models that use complex architectures with many backbones.
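The GLCM texture features feeding the fuzzy classifier can be sketched as follows: build a co-occurrence matrix for one pixel offset, then derive scalar texture statistics from it. This shows a single offset and two common features; the feature set actually used in the paper may differ:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix of an integer image for one pixel
    offset (dx, dy), plus the derived contrast and energy features."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    m /= m.sum()                         # joint probability of level pairs
    i, j = np.indices(m.shape)
    contrast = (m * (i - j) ** 2).sum()  # local intensity variation
    energy = (m ** 2).sum()              # uniformity of the texture
    return m, contrast, energy
```

A uniform image gives zero contrast and maximal energy; infected, textured lung regions shift both statistics, which is what the ANFIS rules then discriminate.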


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Deep Learning , Early Diagnosis , Fuzzy Logic , Humans , Radiography
10.
Comput Med Imaging Graph ; 92: 101957, 2021 09.
Article in English | MEDLINE | ID: covidwho-1330724

ABSTRACT

Lung cancer is one of the most common and deadly malignant cancers. Accurate lung tumor segmentation from CT is therefore very important for correct diagnosis and treatment planning. Automated lung tumor segmentation is challenging due to the high variance in the appearance and shape of the target tumors. To overcome this challenge, we present an effective 3D U-Net equipped with a ResNet architecture and a two-pathway deep supervision mechanism to increase the network's capacity for learning richer representations of lung tumors from global and local perspectives. We conducted extensive experiments on two real medical datasets: a lung CT dataset from Liaoning Cancer Hospital in China with 220 cases and the public TCIA dataset with 422 cases. Our experiments demonstrate that our model achieves an average Dice score of 0.675, sensitivity of 0.731, and F1-score of 0.682 on the dataset from Liaoning Cancer Hospital, and an average Dice score of 0.691, sensitivity of 0.746, and F1-score of 0.724 on the TCIA dataset. The results demonstrate that the proposed 3D MSDS-UNet outperforms state-of-the-art segmentation models in segmenting tumors of all scales, especially small tumors. Moreover, we evaluated the proposed MSDS-UNet on another challenging volumetric medical image segmentation task, COVID-19 lung infection segmentation, where it shows consistent improvement in segmentation performance.
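The Dice score used to evaluate these segmentations is twice the overlap between prediction and ground truth divided by their combined size; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    1.0 for perfect overlap, 0.0 for disjoint masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = (pred & target).sum()
    return 2 * inter / (pred.sum() + target.sum() + eps)
```

Because the denominator is the total foreground size, Dice penalises missed small tumors much more strongly than plain pixel accuracy does, which is why it is the headline metric here.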


Subject(s)
COVID-19/diagnostic imaging , Imaging, Three-Dimensional , Lung Neoplasms/diagnostic imaging , Pneumonia, Viral/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Supervised Machine Learning , Tomography, X-Ray Computed , China , Humans , Pneumonia, Viral/virology , SARS-CoV-2
11.
IEEE J Biomed Health Inform ; 25(7): 2363-2373, 2021 07.
Article in English | MEDLINE | ID: covidwho-1328981

ABSTRACT

COVID-19 pneumonia is a disease that causes a severe health crisis in many people by directly affecting and damaging lung cells. The segmentation of infected areas from computed tomography (CT) images can assist and provide useful information for COVID-19 diagnosis. Although several deep learning-based segmentation methods have been proposed for COVID-19 segmentation and have achieved state-of-the-art results, segmentation accuracy is still not high enough (approximately 85%) due to the variations of COVID-19 infected areas (such as shape and size variations) and the similarities between COVID-19 infected and non-infected areas. To improve the segmentation accuracy of COVID-19 infected areas, we propose an interactive attention refinement network (Attention RefNet), which can be connected to any segmentation network and trained with it in an end-to-end fashion. We propose a skip-connection attention module to improve the important features in both the segmentation and refinement networks, and a seed point module to enhance the important seeds (positions) for interactive refinement. The effectiveness of the proposed method was demonstrated on public datasets (COVID-19CTSeg and MICCAI) and our private multicenter dataset, where segmentation accuracy was improved to more than 90%. We also confirmed the generalizability of the proposed network on our multicenter dataset, where it still achieves high segmentation accuracy.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Databases, Factual , Humans , Lung/diagnostic imaging
12.
IEEE J Biomed Health Inform ; 25(7): 2376-2387, 2021 07.
Article in English | MEDLINE | ID: covidwho-1328979

ABSTRACT

Researchers have sought help from deep learning methods to alleviate clinicians' enormous burden of reading radiological images during the COVID-19 pandemic. However, clinicians are often reluctant to trust deep models due to their black-box characteristics. To automatically differentiate COVID-19 and community-acquired pneumonia from healthy lungs in radiographic imaging, we propose an explainable attention-transfer classification model based on a knowledge distillation network structure. The attention transfer always goes from the teacher network to the student network. First, the teacher network extracts global features and concentrates on the infection regions to generate attention maps. It uses a deformable attention module to strengthen the response of infection regions and to suppress noise in irrelevant regions with an expanded receptive field. Second, an image fusion module combines the attention knowledge transferred from the teacher network to the student network with the essential information in the original input. While the teacher network focuses on global features, the student branch focuses on irregularly shaped lesion regions to learn discriminative features. Lastly, we conduct extensive experiments on public chest X-ray and CT datasets to demonstrate the explainability of the proposed architecture in diagnosing COVID-19.
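Teacher-to-student transfer in knowledge distillation is commonly trained with a KL divergence between temperature-softened output distributions. A minimal sketch of that loss (the temperature and T² scaling follow the usual Hinton-style convention, not necessarily this paper's exact objective):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions;
    the T*T factor keeps gradient magnitudes comparable across T."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T
```

A higher temperature exposes the teacher's "dark knowledge" about near-miss classes, which is what the student (and here, its attention maps) learns to imitate.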


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Algorithms , Humans , Lung/diagnostic imaging , SARS-CoV-2
13.
J Healthc Eng ; 2021: 5513679, 2021.
Article in English | MEDLINE | ID: covidwho-1286755

ABSTRACT

The world is experiencing an unprecedented crisis due to the coronavirus disease (COVID-19) outbreak, which has affected nearly 216 countries and territories across the globe. Since the pandemic began, there has been growing interest in computational model-based diagnostic technologies to support the screening and diagnosis of COVID-19 cases using medical imaging such as chest X-ray (CXR) scans. Initial studies found that patients infected with COVID-19 show abnormalities in their CXR images that represent specific radiological patterns. Still, detecting these patterns is challenging and time-consuming even for skilled radiologists. In this study, we propose a novel convolutional neural network (CNN)-based deep learning fusion framework using the transfer learning concept, in which parameters (weights) from different models are combined into a single model to extract features from images, which are then fed to a custom classifier for prediction. We use gradient-weighted class activation mapping (Grad-CAM) to visualize the infected areas of CXR images. Furthermore, we provide feature representations through visualization to gain a deeper understanding of the class separability of the studied models with respect to COVID-19 detection. Cross-validation studies are used to assess the performance of the proposed models on open-access datasets containing healthy CXR images as well as images of COVID-19 and other pneumonias. Evaluation results show that the best-performing fusion model attains a classification accuracy of 95.49% with high sensitivity and specificity.
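Gradient-weighted class activation mapping combines convolutional feature maps with channel weights obtained by pooling the class-score gradients. The sketch below shows only that combination step, assuming the feature maps and pooled gradients have already been extracted from a network:

```python
import numpy as np

def grad_cam(feature_maps, pooled_grads):
    """Combine conv feature maps (k, h, w) with channel weights (k,)
    (global-average-pooled gradients of the class score), apply ReLU,
    and rescale the heatmap to [0, 1]."""
    cam = np.tensordot(pooled_grads, feature_maps, axes=1)  # (h, w)
    cam = np.maximum(cam, 0)                                # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The ReLU keeps only regions that positively support the predicted class, which is why Grad-CAM heatmaps highlight the (putatively) infected lung areas.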


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Sensitivity and Specificity
14.
J Healthc Eng ; 2021: 6658058, 2021.
Article in English | MEDLINE | ID: covidwho-1277017

ABSTRACT

The COVID-19 pandemic has had a significant negative effect on people's health, as well as on the world's economy. Polymerase chain reaction (PCR) is one of the main tests used to detect COVID-19 infection. However, it is expensive, time-consuming, and lacks sufficient accuracy. In recent years, convolutional neural networks have attracted many researchers' attention in the machine learning field due to their high diagnostic accuracy, especially in medical image recognition. Many architectures such as Inception, ResNet, DenseNet, and VGG16 have been proposed and achieve excellent performance at a low computational cost. Moreover, to accelerate the training of these traditional architectures, residual connections have been combined with the Inception architecture, leading to hybrid architectures such as Inception-ResNetV2. This paper proposes an enhanced Inception-ResNetV2 deep learning model that can diagnose chest X-ray (CXR) scans with high accuracy. In addition, the Grad-CAM algorithm is used to enhance the visualization of the infected regions of the lungs in CXR images. Compared with state-of-the-art methods, our proposed model proves superior in terms of accuracy, recall, precision, and F1-measure.


Subject(s)
COVID-19/diagnosis , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , SARS-CoV-2 , Algorithms , Diagnosis, Differential , Humans , Lung/diagnostic imaging , Pneumonia, Viral/diagnostic imaging
15.
J Healthc Eng ; 2021: 8869372, 2021.
Article in English | MEDLINE | ID: covidwho-1221672

ABSTRACT

The rapid worldwide spread of the COVID-19 pandemic has infected patients around the world in a short space of time. Chest computed tomography (CT) images of patients infected with COVID-19 can offer early diagnosis and efficient forecast monitoring at a low cost. Automated diagnosis of COVID-19 on CT can speed up many tasks and the application of medical treatments, helping to complement reverse transcription-polymerase chain reaction (RT-PCR) diagnosis. The aim of this work is to develop a system that automatically identifies ground-glass opacities (GGOs) and pulmonary infiltrates (PIs) on CT images from patients with COVID-19, in order to assess disease progression during the patient's follow-up assessment and evaluation. We propose an efficient methodology that applies mean-shift oversegmentation followed by the superpixel-SLIC (simple linear iterative clustering) algorithm to CT images with COVID-19 for pulmonary parenchyma segmentation. To identify the pulmonary parenchyma, we describe each superpixel cluster by its position, grey intensity, second-order texture, and spatial-context-saliency features, and classify it with a tree random forest (TRF). Second, applying watershed segmentation to the mean-shift clusters, we identify GGO and PI zones only within the segmented pulmonary parenchyma, describing each watershed cluster by its position, grey intensity, gradient entropy, second-order texture, Euclidean distance to the border region of the PI zone, and global saliency features, again using a TRF. Our classification results for pulmonary parenchyma identification on CT images with COVID-19 had a precision of over 92% and recall of over 92% on twofold cross-validation. For GGO and PI identification, we achieved 96% precision and 96% recall on twofold cross-validation.


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , SARS-CoV-2 , Tomography, X-Ray Computed/methods , Algorithms , COVID-19/classification , COVID-19/pathology , Databases, Factual , Deep Learning , Disease Progression , Early Diagnosis , Follow-Up Studies , Humans , Lung/pathology , Pandemics , Radiographic Image Interpretation, Computer-Assisted/statistics & numerical data , Severity of Illness Index , Software , Tomography, X-Ray Computed/statistics & numerical data
16.
J Thorac Imaging ; 35(6): 361-368, 2020 Nov 01.
Article in English | MEDLINE | ID: covidwho-1219262

ABSTRACT

OBJECTIVE: This study aimed to use the radiomics signatures of a machine learning-based tool to evaluate the prognosis of patients with coronavirus disease 2019 (COVID-19) infection. METHODS: The clinical and imaging data of 64 patients with confirmed diagnoses of COVID-19 were retrospectively selected and divided into a stable group and a progressive group according to the data obtained from the ongoing treatment process. Imaging features were extracted from whole-lung images of baseline computed tomography (CT) scans, and dimensionality reduction was performed. Support vector machines were used to construct radiomics signatures and to compare differences between the 2 groups. We also compared the differences of signature scores in the clinical, laboratory, and CT image feature subgroups and finally analyzed the correlation between the radiomics features of the constructed signature and the other features, including clinical, laboratory, and CT imaging features. RESULTS: The signature has a good classification effect for the stable group and the progressive group, with area under the curve, sensitivity, and specificity of 0.833, 80.95%, and 74.42%, respectively. Signature score differences in the laboratory and CT imaging feature subgroups were not statistically significant (P > 0.05); cough was negatively correlated with GLCM Entropy_angle90_offset4 (r = -0.578) but positively correlated with ShortRunEmphasis_AllDirect_offset4_SD (r = 0.454); C-reactive protein was positively correlated with ClusterProminence_AllDirect_offset4_SD (r = 0.47). CONCLUSION: The radiomics signature of the whole lung based on machine learning may reveal changes in lung microstructure at an early stage and help to indicate the progression of the disease.


Subject(s)
COVID-19/diagnostic imaging , Lung/diagnostic imaging , Machine Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Diagnosis, Differential , Disease Progression , Female , Humans , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , SARS-CoV-2 , Sensitivity and Specificity , Severity of Illness Index
17.
J Healthc Eng ; 2021: 5528441, 2021.
Article in English | MEDLINE | ID: covidwho-1211612

ABSTRACT

Novel coronavirus pneumonia (NCP) has become a global pandemic disease, and computed tomography (CT)-based image analysis and recognition is one of the important tools for clinical diagnosis. To assist medical personnel in achieving an efficient and fast diagnosis of patients with novel coronavirus pneumonia, this paper proposes an assisted diagnosis algorithm based on ensemble deep learning. The method combines stacked generalization ensemble learning with the VGG16 deep learning network to form a cascade classifier; the information constituting the cascade classifier comes from multiple subsets of the training set, each of which is used to collect deviation information about the generalization behavior of the data set, and this information feeds the cascade classifier. The algorithm was experimentally validated for classifying patients with novel coronavirus pneumonia, patients with common pneumonia (CP), and normal controls, and it achieved a prediction accuracy of 93.57%, sensitivity of 94.21%, specificity of 93.93%, precision of 89.40%, and F1-score of 91.74% across the three categories. The results show that the proposed method has good classification performance and can significantly improve the performance of deep neural networks on multicategory prediction tasks.
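The core idea here, stacked generalization, trains base learners and then fits a meta-learner on their out-of-fold predictions. A minimal sketch, with small scikit-learn classifiers standing in for the paper's VGG16-based members (a real system would stack CNN probability outputs):

```python
# Stacked generalization (stacking): base learners plus a meta-learner
# fitted on out-of-fold predictions, here on a synthetic 3-class task.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,  # out-of-fold predictions protect against leakage
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(round(acc, 3))
```

The `cv=5` argument is what makes this stacking rather than naive blending: the meta-learner never sees predictions a base learner made on its own training folds.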


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed , Algorithms , Databases, Factual , Humans , Pandemics , Radiography, Thoracic , SARS-CoV-2 , Sensitivity and Specificity , Tomography, X-Ray Computed/classification , Tomography, X-Ray Computed/methods
18.
IEEE J Biomed Health Inform ; 25(7): 2353-2362, 2021 07.
Article in English | MEDLINE | ID: covidwho-1203809

ABSTRACT

OBJECTIVE: Coronavirus disease 2019 (COVID-19) has caused considerable morbidity and mortality, especially in patients with underlying health conditions. A precise prognostic tool to identify poor outcomes among such cases is desperately needed. METHODS: A total of 400 COVID-19 patients with underlying health conditions were retrospectively recruited from 4 centers, including 54 dead cases (labeled as poor outcomes) and 346 patients discharged or hospitalized for at least 7 days since the initial CT scan. Patients were allocated to a training set (n = 271), a test set (n = 68), and an external test set (n = 61). We proposed an initial CT-derived hybrid model, combining a 3D-ResNet10-based deep learning model and a quantitative 3D radiomics model, to predict the probability of a COVID-19 patient reaching a poor outcome. Model performance was assessed by area under the receiver operating characteristic curve (AUC), survival analysis, and subgroup analysis. RESULTS: The hybrid model achieved AUCs of 0.876 (95% confidence interval: 0.752-0.999) and 0.864 (0.766-0.962) in the test and external test sets, outperforming other models. Survival analysis verified the hybrid model score as a significant risk factor for mortality (hazard ratio, 2.049 [1.462-2.871], P < 0.001) that could stratify patients into high and low risk of reaching a poor outcome (P < 0.001). CONCLUSION: The hybrid model combining deep learning and radiomics could accurately identify poor outcomes in COVID-19 patients with underlying health conditions from initial CT scans. Its strong risk stratification ability could help flag patients at risk of death and allow for timely surveillance.
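One common way to build such a "hybrid" model is late fusion: fit one classifier on deep-network features and one on radiomics features, then average their probabilities. The sketch below uses synthetic stand-ins for both feature sets (the paper's features come from 3D-ResNet10 activations and quantitative 3D radiomics of baseline CT); the fusion weighting is an assumption for illustration.

```python
# Late-fusion hybrid: average the probabilities of a deep-feature branch
# and a radiomics branch, then evaluate with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, size=n)                          # 1 = poor outcome
deep_feats = rng.normal(size=(n, 64)) + y[:, None] * 0.3
radiomic_feats = rng.normal(size=(n, 30)) + y[:, None] * 0.3

tr, te = slice(0, 300), slice(300, None)
deep_clf = LogisticRegression(max_iter=1000).fit(deep_feats[tr], y[tr])
rad_clf = LogisticRegression(max_iter=1000).fit(radiomic_feats[tr], y[tr])

# Hybrid score: simple average of the two branches' probabilities
p_hybrid = 0.5 * deep_clf.predict_proba(deep_feats[te])[:, 1] \
         + 0.5 * rad_clf.predict_proba(radiomic_feats[te])[:, 1]
auc = roc_auc_score(y[te], p_hybrid)
print(round(auc, 3))
```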


Subject(s)
COVID-19 , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Aged , Aged, 80 and over , COVID-19/diagnostic imaging , COVID-19/mortality , Comorbidity , Female , Humans , Imaging, Three-Dimensional , Lung/diagnostic imaging , Male , Middle Aged , Prognosis , ROC Curve , Retrospective Studies , SARS-CoV-2
19.
Sci Rep ; 11(1): 8602, 2021 04 21.
Article in English | MEDLINE | ID: covidwho-1196850

ABSTRACT

COVID-19 spread across the globe at an immense rate and has left healthcare systems incapacitated to diagnose and test patients at the needed rate. Studies have shown promising results for distinguishing COVID-19 from viral and bacterial pneumonia in chest X-rays. Automation of COVID-19 testing using medical images can speed up the testing process where health care systems lack sufficient numbers of reverse-transcription polymerase chain reaction tests. Supervised deep learning models such as convolutional neural networks need enough labeled data for all classes to correctly learn the task of detection. Gathering labeled data is a cumbersome task that requires time and resources, which could further strain health care systems and radiologists at the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known and labelled classes (Normal and Viral Pneumonia) without the need for labels or training data from the unknown class. We used the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. In this work, we use transfer learning to segment the lungs in the COVIDx dataset. Next, we show why segmentation of the region of interest (lungs) is vital to correctly learning the task of classification, especially in datasets that contain images from different sources, as is the case for the COVIDx dataset. Finally, we show improved detection of COVID-19 cases using our generative model (RANDGAN) compared with conventional generative adversarial networks for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
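The anomaly-detection principle behind this kind of model can be illustrated without a GAN: fit a model of the known classes only, and flag samples the model reconstructs poorly as belonging to an unseen class. Below, PCA reconstruction error stands in for the GAN-based anomaly score; this is a conceptual sketch, not RANDGAN itself.

```python
# Train only on "known" data; score samples by reconstruction error.
# Samples far from the learned manifold (the unseen class) score higher.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
known = rng.normal(size=(500, 100))            # stand-in: Normal + Pneumonia
unknown = rng.normal(loc=2.0, size=(50, 100))  # stand-in: unseen class

pca = PCA(n_components=20).fit(known)          # learn the "known" manifold

def anomaly_score(x):
    """Mean squared reconstruction error per sample."""
    recon = pca.inverse_transform(pca.transform(x))
    return ((x - recon) ** 2).mean(axis=1)

# Unseen-class samples reconstruct worse, so their scores are higher.
print(anomaly_score(known).mean() < anomaly_score(unknown).mean())
```

A GAN-based detector replaces the PCA reconstruction with generator inversion plus a discriminator feature distance, but the decision rule (threshold an anomaly score learned from known classes only) is the same.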


Subject(s)
COVID-19/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Humans , ROC Curve , Supervised Machine Learning
20.
Lancet Digit Health ; 3(6): e340-e348, 2021 06.
Article in English | MEDLINE | ID: covidwho-1193002

ABSTRACT

BACKGROUND: Acute respiratory distress syndrome (ARDS) is a common, but under-recognised, critical illness syndrome associated with high mortality. An important factor in its under-recognition is the variability in chest radiograph interpretation for ARDS. We sought to train a deep convolutional neural network (CNN) to detect ARDS findings on chest radiographs. METHODS: CNNs were pretrained on 595 506 radiographs from two centres to identify common chest findings (eg, opacity and effusion), and then trained on 8072 radiographs annotated for ARDS by multiple physicians using various transfer learning approaches. The best performing CNN was tested on chest radiographs in an internal and external cohort, including a subset reviewed by six physicians, including a chest radiologist and physicians trained in intensive care medicine. Chest radiograph data were acquired from four US hospitals. FINDINGS: In an internal test set of 1560 chest radiographs from 455 patients with acute hypoxaemic respiratory failure, a CNN could detect ARDS with an area under the receiver operating characteristic curve (AUROC) of 0·92 (95% CI 0·89-0·94). In the subgroup of 413 images reviewed by at least six physicians, its AUROC was 0·93 (95% CI 0·88-0·96), sensitivity 83·0% (95% CI 74·0-91·1), and specificity 88·3% (95% CI 83·1-92·8). Among images with zero of six ARDS annotations (n=155), the median CNN probability was 11%, with six (4%) assigned a probability above 50%. Among images with six of six ARDS annotations (n=27), the median CNN probability was 91%, with two (7%) assigned a probability below 50%. In an external cohort of 958 chest radiographs from 431 patients with sepsis, the AUROC was 0·88 (95% CI 0·85-0·91). When radiographs annotated as equivocal were excluded, the AUROC was 0·93 (0·92-0·95). INTERPRETATION: A CNN can be trained to achieve expert physician-level performance in ARDS detection on chest radiographs.
Further research is needed to evaluate the use of these algorithms to support real-time identification of ARDS patients to ensure fidelity with evidence-based care or to support ongoing ARDS research. FUNDING: National Institutes of Health, Department of Defense, and Department of Veterans Affairs.
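The transfer-learning recipe in this study, pretrain on generic chest findings and then retrain a task-specific head, can be sketched with scikit-learn stand-ins. A small MLP plays the role of the pretrained CNN, its hidden layer is reused as a frozen feature extractor, and only a new classifier is fitted on the (synthetic) target task; all data and label rules here are illustrative assumptions.

```python
# Transfer learning sketch: "pretrain" on a generic task, freeze the
# learned representation, fit only a new head on the small target task.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_pre = rng.normal(size=(1000, 50))
y_pre = (X_pre[:, :5].sum(axis=1) > 0).astype(int)   # generic pretext labels

pretrained = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300,
                           random_state=0).fit(X_pre, y_pre)

def hidden_features(model, X):
    """Frozen forward pass through the pretrained hidden layer (ReLU)."""
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

# Small annotated target set, analogous to the ARDS-labelled radiographs
X_tgt = rng.normal(size=(200, 50))
y_tgt = (X_tgt[:, :5].sum(axis=1) > 0).astype(int)
head = LogisticRegression(max_iter=1000).fit(
    hidden_features(pretrained, X_tgt), y_tgt)
print(round(head.score(hidden_features(pretrained, X_tgt), y_tgt), 3))
```

Freezing the pretrained layers is what lets the small annotated set (8072 radiographs in the paper, versus 595 506 for pretraining) suffice for the target task.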


Subject(s)
Deep Learning , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic , Respiratory Distress Syndrome/diagnosis , Aged , Algorithms , Area Under Curve , Datasets as Topic , Female , Hospitals , Humans , Lung/diagnostic imaging , Lung/pathology , Male , Middle Aged , Pleural Cavity/diagnostic imaging , Pleural Cavity/pathology , Pleural Diseases , Radiography , Respiratory Distress Syndrome/diagnostic imaging , Retrospective Studies , United States